

Search for: All records

Creators/Authors contains: "Blandin, Jack"

Note: Clicking a Digital Object Identifier (DOI) takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Defining fairness in algorithmic contexts is challenging, particularly when adapting to new domains. Our research introduces a novel method for learning and applying group fairness preferences across different classification domains, without the need for manual fine-tuning. Utilizing concepts from inverse reinforcement learning (IRL), our approach enables the extraction and application of fairness preferences from human experts or established algorithms. We propose the first technique for using IRL to recover and adapt group fairness preferences to new domains, offering a low-touch solution for implementing fair classifiers in settings where expert-established fairness tradeoffs are not yet defined. 
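The abstract above describes recovering a fairness tradeoff from expert decisions via inverse reinforcement learning. As a minimal sketch (not the paper's actual method), one can frame the simplest version as follows: assume the expert's policy maximizes accuracy minus an unknown weight `w` times a Demographic Parity gap, and estimate `w` as the candidate weight whose induced policy best reproduces the expert's decisions. All function names and the grid-search procedure here are illustrative assumptions.

```python
import numpy as np

def dp_gap(preds, groups):
    # Demographic Parity gap: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|.
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def best_thresholds(scores, labels, groups, w):
    # Hypothetical decision-maker: pick per-group thresholds maximizing
    # accuracy - w * DP gap (a simple linear fairness tradeoff).
    best, best_util = None, -np.inf
    grid = np.linspace(0, 1, 11)
    for t0 in grid:
        for t1 in grid:
            preds = np.where(groups == 0, scores >= t0, scores >= t1).astype(int)
            util = (preds == labels).mean() - w * dp_gap(preds, groups)
            if util > best_util:
                best, best_util = (t0, t1), util
    return best

def recover_weight(scores, labels, groups, expert_preds, candidate_ws):
    # "IRL" step in miniature: the candidate tradeoff weight whose induced
    # policy best agrees with the expert's decisions is our estimate of the
    # expert's fairness preference.
    def agreement(w):
        t0, t1 = best_thresholds(scores, labels, groups, w)
        preds = np.where(groups == 0, scores >= t0, scores >= t1).astype(int)
        return (preds == expert_preds).mean()
    return max(candidate_ws, key=agreement)
```

Once `w` is recovered from the source domain, the same tradeoff can be applied to a new classification domain without manual tuning, which is the low-touch transfer the abstract describes.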
  2. Recent works extend classification group fairness measures to sequential decision processes such as reinforcement learning (RL) by measuring fairness as the difference in decision-maker utility (e.g. accuracy) across groups. This approach suffers when decision-maker utility is not perfectly aligned with group utility, such as in repeat loan applications, where a false positive (loan default) impacts the groups (applicants) and the decision-maker (lender) by different magnitudes. Some works remedy this by measuring fairness in terms of group utility, typically referred to as "qualification", but few works offer solutions that yield group qualification equality. Those that do are prone to violating the "no-harm" principle, where one or more groups' qualifications are lowered in order to achieve equality. In this work, we characterize this problem space as having three implicit objectives: maximizing decision-maker utility, maximizing group qualification, and minimizing the difference in qualification between groups. We provide an RL policy learning technique that optimizes for these objectives directly by constructing a multi-objective reward function that encodes them as distinct reward signals. Under suitable parameterizations, our approach is guaranteed to respect the "no-harm" principle. 
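The three objectives named in the abstract above can be sketched as a single scalarized reward. This is an illustrative simplification, not the paper's reward construction: the weights `alpha`, `beta`, `gamma` and the function shape are assumptions, chosen only to show how "suitable parameterizations" can make harming a group to close the gap unprofitable.

```python
def multi_objective_reward(dm_utility, qual_a, qual_b, alpha, beta, gamma):
    # Scalarize the three objectives from the abstract:
    #   + alpha * decision-maker utility        (maximize)
    #   + beta  * total group qualification     (maximize)
    #   - gamma * qualification gap             (minimize)
    return (alpha * dm_utility
            + beta * (qual_a + qual_b)
            - gamma * abs(qual_a - qual_b))
```

With `beta > gamma`, closing the gap by lowering the better-off group's qualification (e.g. from 0.9 down to 0.8 to match the other group) strictly reduces the reward, so a reward-maximizing policy respects the no-harm principle in this toy setting.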
  3. Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification settings. Prior work has translated these definitions to other machine learning environments, such as unsupervised learning and reinforcement learning, by implementing their closest mathematical equivalent. As a result, there are numerous bespoke interpretations of these definitions. This work aims to unify the shared aspects of these bespoke definitions, and to this end we provide a group fairness framework that generalizes beyond classification problems. We leverage two fairness principles that enable this generalization. First, our framework measures outcomes in terms of utilities, rather than predictions, and does so for both the decision-maker and the individual. Second, our framework can consider counterfactual outcomes, rather than just observed outcomes, thus preventing loopholes where fairness criteria are satisfied through self-fulfilling prophecies. We provide concrete examples of how our utility fairness framework avoids these assumptions and thus naturally integrates with classification, clustering, and reinforcement learning fairness problems. We also show that many of the bespoke interpretations of Demographic Parity and Equal Opportunity fit nicely as special cases of our framework. 
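The core move in the abstract above, measuring fairness over individual utilities instead of predictions, can be sketched in a few lines. This is a hedged illustration of the general idea, not the paper's framework: the function name and the two-group restriction are assumptions. The point is that the same gap measure applies to any setting where each individual receives a utility, whether from a classifier, a clustering assignment, or an RL trajectory.

```python
import numpy as np

def utility_fairness_gap(utilities, groups):
    # Generalized group fairness gap: difference in mean *individual
    # utility* between two groups. Because it is defined over utilities
    # rather than predicted labels, the same measure applies beyond
    # classification (e.g. clustering cost, RL return per individual).
    u = np.asarray(utilities, dtype=float)
    g = np.asarray(groups)
    return abs(u[g == 0].mean() - u[g == 1].mean())
```

Demographic Parity then falls out as a special case: if each individual's "utility" is taken to be the predicted label itself, the gap reduces to the usual difference in positive-prediction rates between groups.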